
AutoModel supports FA2/paged attention #2133

Closed · wants to merge 4 commits

Conversation

fxmarty (Contributor) commented on Jun 27, 2024

As per title.

The models benefiting from it in Transformers are:

  • cohere
  • dbrx
  • gemma
  • llama
  • jetmoe
  • mistral
  • mixtral
  • olmo
  • phi
  • phi3
  • qwen2
  • qwen2_moe
  • stablelm
  • starcoder2

i.e. all models with _supports_cache_class = True and _supports_flash_attn_2 = True, following huggingface/transformers#31446 plus a few additional changes needed in Transformers to support a single total-sequence-length dimension, i.e. inputs of shape [total_seqlen, hidden_size].
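For context (not code from this PR), the [total_seqlen, hidden_size] layout refers to packed/unpadded sequences as consumed by FlashAttention-2's varlen kernel: all requests are concatenated along one token dimension and sequence boundaries are passed as cumulative lengths. A minimal sketch, assuming flash-attn is installed and a CUDA device is available:

```python
import torch
from flash_attn import flash_attn_varlen_func  # assumes flash-attn is installed

num_heads, head_dim = 8, 64
seq_lens = [5, 3, 7]                 # three requests of different lengths
total_seqlen = sum(seq_lens)         # 15 tokens packed along a single dimension

# cu_seqlens marks where each sequence starts/ends inside the packed tensor.
cu_seqlens = torch.tensor([0, 5, 8, 15], dtype=torch.int32, device="cuda")

# Packed tensors: [total_seqlen, num_heads, head_dim] instead of [batch, seqlen, ...]
q = torch.randn(total_seqlen, num_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_varlen_func(
    q, k, v,
    cu_seqlens_q=cu_seqlens,
    cu_seqlens_k=cu_seqlens,
    max_seqlen_q=max(seq_lens),
    max_seqlen_k=max(seq_lens),
    causal=True,
)
print(out.shape)  # torch.Size([15, 8, 64]), i.e. [total_seqlen, num_heads, head_dim]
```

The same packed layout is what makes paged attention straightforward, since there is no per-request padding to track.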

models/__init__.py is getting somewhat bloated; I expect it will be refactored with the upcoming TRT-LLM / multi-backend support.
